- Maintain and troubleshoot data integration pipelines to ensure stable data flow into AI and analytics systems
- Support model development by assisting with training, validation, and optimization of machine learning workflows
- Conduct data analysis to extract insights and provide clear reports supporting R&D research questions
- Solve technical challenges related to data access, pipeline performance, and software limitations
- Ensure continuity of ongoing projects by aligning closely with the core team and delivering on timelines
- Perform image analysis and prepare datasets required for scientific and ML use cases
- Manage and improve ETL processes to ensure data quality, structure, and availability
- Document workflows, pipeline changes, and analytical steps to ensure clarity and reproducibility

- Academic background in computer science, data science, engineering, or a related quantitative field
- Strong proficiency in Python with expertise in scientific and analytical libraries
- Skilled in SQL and working with relational databases
- Understanding of ETL concepts and practical experience working with data pipelines
- Solid foundation in machine learning principles and the model lifecycle
- Ability to perform image analysis for scientific or research applications
- Strong communication and interpersonal skills with the ability to collaborate in a technical team
- Independent, structured problem-solver with a commitment to clear documentation and FAIR data practices

- Opportunity to contribute directly to active R&D projects with immediate real-world impact
- Hands-on involvement in AI, machine learning, and data integration challenges in a scientific environment
- Close collaboration with a small, highly skilled technical team

Your contact
Reference number: 863771/1
Phone: +41 44 225 50 00
E-mail: positionen@hays.ch
Employment type: Freelance, for a project
Principal Accountabilities:
- Collaboration in projects of the European Data Science & Advanced Analytics Team
- Concept, design, development, and execution of complex, innovative AI/machine learning solutions, as well as execution and implementation of concept studies using advanced statistical methods
- Development of deep learning models for structured medical concept extraction from unstructured data
- Productionization of machine learning algorithms on Big Data platforms
- Application of modern data mining and machine learning techniques to Healthcare Big Data to identify complex relationships and link heterogeneous data sources
- Advanced usage of Large Language Models for summarization, chatbots, entity extraction, etc.
- Develop foundational deep learning models for assets and patients
- Build and train new production-grade algorithms that can learn from complex, high-dimensional data to uncover patterns from which machine learning models and applications can be developed

Our Ideal Candidate Will Have:
- Master's degree in Computer Science, Mathematics/Statistics, Economics/Econometrics, or a related field
- Substantial professional experience in quantitative data analysis, or a PhD with at least 1 year of relevant professional experience researching machine learning algorithms
- Very good knowledge and in-depth understanding of machine learning methods, both classical and deep learning models
- Relevant experience with Natural Language Processing (NLP) models for extracting structured concepts from unstructured free text, including the design, training, and evaluation of information-extraction pipelines
- Very strong technical capability in Python, SQL, and the Hadoop ecosystem
- Experience applying AI/machine learning methods to business questions
- Very good knowledge of advanced statistical and econometric methods in theory and practice
- Experience handling Big Data
- Ability to write clean, reusable, production-level code
- Excellent communication skills (written and oral), including on the technical aspects of a project, with the ability to develop usable documentation, interpret results, and make business recommendations
- Strong analytical mindset and logical thinking; strong QC mindset
- Knowledge of the pharmaceutical market and experience with pharmaceutical data (medical, hospital, pharmacy, claims data) is a plus, but not a must
- Self-responsible for managing projects
- Fluency in German and English
What You’ll Do

Drive Measurable Business Impact
- Independently lead AI and analytics initiatives that generate tangible business value
- Actively contribute to and influence the company’s strategic direction

Apply Advanced Analytics & AI
- Develop and apply advanced statistical methods in an agile environment
- Work on business-critical questions using regression models, time series analysis, and machine learning & AI algorithms
- Collaborate cross-functionally or drive initiatives independently

Turn Data into Action
- Design and execute analyses on large datasets
- Translate findings into clear, actionable recommendations
- Work across the full spectrum, from Excel-based analysis to deep learning models

Build Scalable AI Solutions
- Develop and own customized Data Science and AI solutions
- Work with SQL on Azure and Snowflake platforms; Python is nice to have
- Lead projects end-to-end: from Proof of Concept (PoC) to fully operational production models
- Create audience-tailored presentations
- Translate complex analytical insights into clear business language
- Deliver compelling data storytelling to support decision-making at all levels

What makes you stand out
- Minimum 3 years of experience in Data Science, AI, Advanced Analytics, or similar roles
- Proven ability to generate measurable business value through data-driven solutions
- Strong expertise in statistical modeling and machine learning techniques
- Hands-on experience with Python, SQL, Azure, and Snowflake
- Experience building scalable models from PoC to production
- Strong communication and stakeholder management skills
- Ability to explain complex topics to both technical and non-technical audiences
- Bachelor’s or Master’s degree in Finance, Statistics, Computer Science, Mathematics, or a related quantitative field

We are looking forward to your application and to applicants who enrich our diverse culture!
What makes you stand out
- You hold a Master’s degree in Economics, Mathematics, Statistics, Computer Science, or another related quantitative field.
- You bring at least 3 years of professional Data Science experience and have a proven track record of bringing models from the lab into real-world production use cases.
- You are proficient in Python and SQL and have hands-on experience building solutions on Azure and working with Snowflake.
- You have a deep understanding of statistical methods and apply them effectively in an agile, fast-paced environment.
- You think in products, not experiments: model versioning, testing, and performance monitoring are second nature to you.
- You communicate complex topics clearly and with impact, presenting confidently to different audiences; you are fluent in English, and German skills are a plus.
elasticsearch, AWS, Python, Google BigQuery, Google Cloud Platform, Numpy, Pandas, Gitlab

What you will do
- Design and develop innovative algorithms to power a personalized shopping experience, leveraging cutting-edge machine learning techniques
- Deploy your solutions into production, taking full ownership and ensuring high performance and scalability
- Combine your data science expertise with a pragmatic, agile approach to find innovative solutions and drive measurable results within a fast-paced environment
- Challenge the status quo by identifying areas for improvement in existing retrieval and reranking systems, particularly those relying heavily on business logic, and propose data-driven solutions
- Thrive in a dynamic, fast-paced environment with a flat hierarchy, where your ideas and contributions can make a real difference

Who you are
- Proficiency in Python or experience with at least one scientific computing language (e.g., MATLAB, R, Julia, C++)
- Strong SQL skills with experience in analytical or transactional database environments
- Theoretical understanding of machine learning principles, coupled with a hands-on approach to building and iterating on models
- Proven experience in building and deploying machine learning solutions that deliver tangible business value
- Strong understanding of data structures, algorithms, and tools for efficiently handling large datasets (e.g., pandas, numpy, dask, arrow, polars, …)
- Experience designing, building, and managing data pipelines
- Familiarity with cloud-based model training and serving platforms (e.g., GCP Vertex AI, Amazon SageMaker)
- Solid understanding of statistical methods for model evaluation
- Big Data: Experience analyzing large datasets using statistical and machine learning techniques
- DevOps: Familiarity with CI/CD tools (e.g., GitLab CI/CD, HashiCorp Terraform) is a plus
- Generative AI: Experience with generative AI and agentic frameworks (e.g., LangChain, ADK, CrewAI, Pydantic AI, …) is a plus
- Understanding of recommendation, retrieval, and reranking systems in e-commerce and retail is a plus
- Excellent written and verbal communication skills in English
- Ability to effectively communicate complex machine learning concepts to both technical and non-technical stakeholders
- Proven ability to collaborate effectively within a team to establish standards and best practices for deploying machine learning models
- A proactive approach to knowledge sharing and fostering a quick development environment

Nice to have
- Experience with BigQuery
- Knowledge of time series and (graph) neural network models
- Familiarity with statistical testing and Gaussian Processes
- Strong knowledge of Computer Vision libraries (e.g., OpenCV, TensorFlow, PyTorch)
- Experience maintaining machine learning pipelines through MLOps frameworks (e.g.
YOUR TASKS
- Design and maintain scalable data architectures and pipelines
- Collaborate with cross-functional teams on data requirements
- Implement data quality and governance processes
- Drive adoption of modern data engineering technologies
- Guide and coach junior data engineers

YOUR PROFILE
- Degree in Computer Science, Engineering, or a related field
- At least 5 years of experience in data engineering, including architecture
- Expertise in ETL, Data Lakes, and data warehousing
- Strong SQL, SSIS, SSAS, and Azure SQL/Databricks skills
- Experience with CI/CD (Azure DevOps, Git)
- Programming skills in R, Python, or Scala
- Very good English and strong collaboration skills

YOUR BENEFITS
Nordex offers a range of attractive benefits – here’s a selection of what you can look forward to.
What makes you stand out
- You hold a degree in Business Informatics, Data Science, Computer Science, Industrial Engineering, Business Administration, or a comparable field.
- You have several years of experience in building, operating, or further developing data-driven products, ideally in a sales, marketing, or customer service context.
- You have solid knowledge in designing and developing cloud-based data products such as data warehouses, semantic layers, analytical models, and reporting and analytics solutions.
- You possess strong knowledge of data architectures, data engineering, data science, and data governance.
- You bring experience in leading interdisciplinary teams both functionally and disciplinarily, ideally including Data Engineers, Data Scientists, Product Owners, and Data Governance roles.
- You demonstrate strong analytical and conceptual thinking skills and the ability to prepare complex topics in a structured and understandable way.
- You communicate effectively and are able to bridge different target groups (business & tech).
- You work with strong execution and results orientation while ensuring high data quality.
- You have excellent German and English skills, both written and spoken.
Your tasks:
- Development and implementation of AI-powered applications with a focus on front-end solutions (machine learning, NLP, computer vision, generative AI, RAG systems)
- Design and programming of software modules using modern languages and frameworks (Python, Java, .NET, TensorFlow, PyTorch)
- Building and integrating data pipelines for training and operating AI models on platforms such as the in-house development AI.nstein
- Designing and developing domain-specific AI agents using prompt engineering
- Collaboration with Data Scientists and AI architects on moving AI models into production environments
- Ensuring high code quality through clean-code principles, unit tests, and automated testing procedures
- Performing error analyses as well as continuously optimizing and further developing existing applications
- Technical documentation of developments, architectures, and results
- Evaluation and assessment of new AI technologies, tools, and frameworks

Your qualifications:
- Completed degree in Computer Science or Business Informatics, or an equivalent qualification acquired by other means
- Several years of professional experience in software development with modern programming languages (Python, Java, C#, JavaScript)
- Sound knowledge in the field of artificial intelligence (machine learning, NLP, generative AI, RAG systems)
- Substantial experience in prompt engineering and in developing AI agents
- Hands-on experience with AI/data platforms (e.g.
As part of our team, you will take on the following responsibilities:
- You develop and maintain cloud-based data pipelines and data products specifically for the Finance domain, built on our Snowflake database.
- You integrate financial and transactional data from various sources into our central Data Platform, optimize ETL processes, and ensure high data quality.
- You design and build new DataFlows and DataSets for Finance use cases and create as well as manage the corresponding Power BI reports.
- You collaborate closely with Finance Product Owners, Data Scientists, and Business Units to deliver meaningful analytical data products.
- You take ownership of data governance for all Finance data products and actively drive their further development.
- You support the modernization of our Finance data architecture and contribute to the migration towards cloud- and big-data-based technologies.

What makes you stand out
- You hold a degree in Computer Science, Business Informatics, or a comparable qualification.
- You have strong expertise in SQL databases, including data modeling, table design, and data querying, ideally with experience in Snowflake.
- You bring experience in developing and integrating data products and are familiar with Azure Data Factory, Python, and modern cloud technologies (preferably Azure).
- You have hands-on experience creating meaningful Power BI reports; knowledge of CI/CD or container technologies (e.g., Docker, Kubernetes) is a plus.
- You work analytically, communicate effectively, collaborate well in teams, and demonstrate a strong hands-on mentality.
- You are fluent in English and have very good German skills; you also have a passion for financial databases and enjoy exploring new topics.
Apply statistical methods to detect anomalies, trends, and shifts.

Your Profile
- BSc/MSc/PhD degree in Mathematics, Computer Science, Information Technology, Physics, or Engineering
- At least 5 years of experience in the field of data analytics and data mining
- Technical expert in the field of data analytics and data mining
- Proficient in Python and SQL
- Experienced with statistical modelling and data mining techniques
- Strong problem-solving skills with the ability to multi-task and manage multiple projects simultaneously
Main Responsibilities:
- Work directly with Sales Engineers, Product Sales Development Managers, and Sales Managers
- Analyze sales data and pricing trends to support improvements in market-specific pricing
- Track online pricing for vacuum pump technology to assess competitive positioning
- Design and generate reports, dashboards, and visualizations for the management team
- Attend sales meetings, business functions, service calls, and customer visits alongside account managers and mentors
- Support development of internal data strategies to drive pricing and market-focused sales initiatives
- Build an appreciation for how data influences direct and indirect sales and marketing strategies
- Perform other related duties as assigned

To succeed, you will need
Skills / Knowledge / Experience:
- Education level: Must be a Junior or Senior majoring in Computer Science, Management Information Systems, Business/Marketing Analytics, or Industrial Engineering
- Strong analytical and creative problem-solving skills
- Proficiency in Excel and/or Python; Tableau or Power BI experience preferred
- Self-motivated, independent, flexible, organized, and methodical
- Results-driven, accountable, and ambitious
- Adaptability in a dynamic, fast-paced environment

In return, we offer
We believe there is always a better way.